    Device and Circuit Architectures for In‐Memory Computing

    With the rise of artificial intelligence (AI), computing systems face new challenges related to the large amount of data and the increasing burden of communication between the memory and the processing unit. In‐memory computing (IMC) appears as a promising approach to overcome this memory bottleneck and enable higher parallelism of data processing, thanks to the memory array architecture. As a result, IMC shows higher throughput and lower energy consumption than the conventional digital approach, not only for typical AI tasks but also for general‐purpose problems such as constraint satisfaction problems (CSPs) and linear algebra. Herein, an overview of IMC is provided in terms of memory devices and circuit architectures. First, the memory device technologies adopted for IMC are summarized, covering both charge‐based memories and emerging devices that rely on electrically induced material modification at the chemical or physical level. Then, computational memory programming and the corresponding device nonidealities are described with reference to offline and online training of IMC circuits. Finally, array architectures for computing are reviewed, including typical architectures for neural network accelerators, content addressable memory (CAM), and novel circuit topologies for general‐purpose computing with low complexity.
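
    The parallelism referred to above comes from the analog matrix-vector multiplication that a crosspoint memory array performs physically: input voltages applied to the rows are multiplied by the programmed device conductances (Ohm's law), and the resulting currents are summed along the columns (Kirchhoff's current law). A minimal numerical sketch of this idealized operation follows; all names and values are illustrative, not taken from the article.

        import numpy as np

        # Idealized crosspoint array: G[i, j] is the programmed conductance
        # (siemens) of the device connecting input row i to output column j.
        G = np.array([[1e-6, 2e-6],
                      [3e-6, 4e-6]])

        # Input voltages applied to the rows (volts).
        v = np.array([0.1, 0.2])

        # Each column current is the sum of Ohm's-law contributions,
        # I_j = sum_i G[i, j] * v[i], so the whole matrix-vector product
        # is obtained in a single parallel read step.
        i_out = G.T @ v
        print(i_out)  # column currents, proportional to the result of G^T v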

    One Step in-Memory Solution of Inverse Algebraic Problems

    Machine learning requires processing large amounts of irregular data and extracting meaningful information from them. The von Neumann architecture is challenged by such computation: the physical separation between memory and processing unit limits the speed at which large volumes of data can be analyzed, and most of the time and energy are spent moving information from the memory to the processor and back. In-memory computing executes operations directly within the memory, without any movement of information. In particular, thanks to emerging memory technologies such as memristors, it is possible to program arbitrary real numbers in an analog fashion directly into a single memory device and, at the array level, to execute algebraic operations in-memory and in one step. This chapter presents the latest results in accelerating inverse operations, such as the solution of linear systems, in-memory and in a single computational cycle.
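
    A minimal numerical sketch of the one-step idea, under the common assumption (not detailed in this abstract) that feedback amplifiers force the array currents to balance the injected inputs, so the output voltages settle at the algebraic solution of Gx = b; the conductances and currents below are illustrative.

        import numpy as np

        # Conductance matrix programmed into the crosspoint array (siemens).
        G = np.array([[3e-6, 1e-6],
                      [1e-6, 2e-6]])

        # Input currents injected at the rows (amperes), encoding the vector b.
        b = np.array([5e-6, 4e-6])

        # Idealized feedback: the op-amps drive the output voltages x until
        # the array currents G @ x balance the injected currents, i.e. G x = b.
        # The circuit's single computational cycle corresponds to this solve.
        x = np.linalg.solve(G, b)

        print(x)                      # output voltages = solution of G x = b
        print(np.allclose(G @ x, b))  # True: the steady-state condition holds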

    In-memory eigenvector computation in time O(1)

    In-memory computing with crosspoint resistive memory arrays has gained enormous attention as a way to accelerate matrix-vector multiplication in data-centric applications. By combining a crosspoint array with feedback amplifiers, it is possible to compute matrix eigenvectors in one step, without algorithmic iterations. In this work, the time complexity of the eigenvector computation is investigated, based on a feedback analysis of the crosspoint circuit. The results show that the computing time of the circuit is determined by the mismatch degree of the eigenvalues implemented in the circuit, which controls the rising speed of the output voltages. For a dataset of random matrices, the time for computing the dominant eigenvector in the circuit is constant across matrix sizes, i.e., the time complexity is O(1). The O(1) time complexity is also supported by simulations of PageRank on real-world datasets. This work paves the way for fast, energy-efficient accelerators for eigenvector computation in a wide range of practical applications.
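
    As a toy model of the dynamics described above (an assumption of this sketch, not an equation quoted from the paper), the feedback loop can be linearized as dv/dt = (Av/λ − v)/τ: modes with eigenvalues above the implemented λ grow while the rest decay, so the rise speed is set by the eigenvalue mismatch and the output aligns with the dominant eigenvector.

        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.random((4, 4))
        A = (A + A.T) / 2             # symmetric toy matrix

        eigvals, eigvecs = np.linalg.eigh(A)
        lam = 0.95 * eigvals[-1]      # feedback set just below the dominant eigenvalue

        tau, dt = 1e-6, 1e-8          # loop time constant and time step (s)
        v = 1e-3 * rng.random(4)      # small random initial output voltages

        for _ in range(20000):
            # Linearized loop: each mode grows or decays at rate
            # (eig / lam - 1) / tau, i.e. it is set by the eigenvalue mismatch.
            v = v + (dt / tau) * (A @ v / lam - v)

        v_hat = v / np.linalg.norm(v)
        print(abs(v_hat @ eigvecs[:, -1]))  # ~1: aligned with the dominant eigenvector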

    Time complexity of in-memory solution of linear systems

    In-memory computing with crosspoint resistive memory arrays has been shown to accelerate data-centric computations, such as the training and inference of deep neural networks, thanks to the high parallelism endowed by the physical rules of electrical circuits. By connecting crosspoint arrays with negative-feedback amplifiers, it is possible to solve linear algebraic problems, such as linear systems and matrix eigenvectors, in just one step. Based on the theory of feedback circuits, we study the dynamics of the solution of linear systems within a memory array, showing that the time complexity of the solution has no direct dependence on the problem size N; rather, it is governed by the minimal eigenvalue of a matrix associated with the coefficient matrix. We show that, when the linear system is modeled by a covariance matrix, the time complexity is O(log N) or O(1). In the case of sparse positive-definite linear systems, the time complexity is solely determined by the minimal eigenvalue of the coefficient matrix. These results demonstrate the high speed of the circuit for solving linear systems in a wide range of applications, thus supporting in-memory computing as a strong candidate for future big data and machine learning accelerators.
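
    A toy illustration of this scaling, assuming (for the sketch only) that the circuit relaxes like the first-order dynamics τ dv/dt = b − Av, whose slowest mode decays at rate λ_min/τ; the settling time then depends on the minimal eigenvalue and not directly on N.

        import numpy as np

        def settling_steps(A, b, tau=1.0, dt=0.01, tol=1e-6):
            """Steps until v(t) reaches the solution of A v = b within tol,
            integrating the idealized dynamics tau * dv/dt = b - A v."""
            v, v_star = np.zeros(len(b)), np.linalg.solve(A, b)
            for step in range(1, 10**6):
                v = v + (dt / tau) * (b - A @ v)
                if np.linalg.norm(v - v_star) <= tol * np.linalg.norm(v_star):
                    return step
            return None

        rng = np.random.default_rng(1)
        for n in (10, 100, 1000):
            # Positive-definite test matrix with minimal eigenvalue pinned to 1,
            # so the settling time stays roughly constant as n grows.
            Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
            A = Q @ np.diag(np.linspace(1.0, 10.0, n)) @ Q.T
            print(n, settling_steps(A, rng.standard_normal(n)))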

    Brain‐Inspired Structural Plasticity through Reweighting and Rewiring in Multi‐Terminal Self‐Organizing Memristive Nanowire Networks

    Acting as artificial synapses, two‐terminal memristive devices are considered fundamental building blocks for the realization of artificial neural networks. Current memristive crossbar architectures demonstrate the implementation of neuromorphic computing paradigms, although they are unable to emulate typical features of biological neural networks such as high connectivity, adaptability through reconnection and rewiring, and long‐range spatio‐temporal correlation. Herein, self‐organizing memristive random nanowire (NW) networks whose functional connectivity displays homo‐ and heterosynaptic plasticity are reported, enabled by the mutual electrochemical interaction among memristive NWs and NW junctions. In particular, it is shown that rewiring effects observed in single NWs and reweighting effects observed in single NW junctions are responsible for the structural plasticity of the network under electrical stimulation. Such biologically inspired systems allow a low‐cost realization of neural networks that can learn and adapt when subjected to multiple external stimuli, emulating the experience‐dependent synaptic plasticity that shapes the connectivity and functionalities of the nervous system, and can be exploited for the hardware implementation of unconventional computing paradigms.
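
    Purely as a conceptual toy (not the authors' model), the two effects can be pictured on a random graph of NWs: reweighting changes the conductance of an existing junction, while rewiring creates a conductive path where none existed; all thresholds and values below are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 20                              # nanowires (graph nodes)
        adj = np.triu(rng.random((n, n)) < 0.15, 1)
        adj = adj | adj.T                   # random symmetric NW junctions
        g = np.where(adj, 1e-6, 0.0)        # junction conductances (siemens)

        def stimulate(g, i, j, v=1.0, g_max=1e-4, v_rewire=0.8):
            """Toy update for one stimulation across nodes i and j: an existing
            junction is potentiated (reweighting); above a voltage threshold a
            new junction is formed where none existed (rewiring)."""
            if g[i, j] > 0:
                g[i, j] = g[j, i] = min(2 * g[i, j], g_max)
            elif v > v_rewire:
                g[i, j] = g[j, i] = 1e-6
            return g

        g = stimulate(g, 0, 1)              # one example stimulation step
        print(g[0, 1])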

    Memristive neural network for on-line learning and tracking with brain-inspired spike timing dependent plasticity

    Brain-inspired computation can revolutionize information technology by introducing machines capable of recognizing patterns (images, speech, video) and interacting with the external world in a cognitive, humanlike way. Achieving this goal requires first gaining a detailed understanding of how the brain operates, and second identifying a scalable microelectronic technology capable of reproducing some of the inherent functions of the human brain, such as the high synaptic connectivity (~10^4) and the peculiar time-dependent synaptic plasticity. Here we demonstrate unsupervised learning and tracking in a spiking neural network with memristive synapses, where synaptic weights are updated via brain-inspired spike-timing-dependent plasticity (STDP). The synaptic conductance is updated by the local time-dependent superposition of pre- and post-synaptic spikes within a hybrid one-transistor/one-resistor (1T1R) memristive synapse. Only two synaptic states, namely the low-resistance state (LRS) and the high-resistance state (HRS), are sufficient to learn and recognize patterns. Unsupervised learning of a static pattern and tracking of a dynamic pattern of up to 4 × 4 pixels are demonstrated, paving the way for intelligent hardware technology with up-scaled memristive neural networks.
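
    A minimal sketch of the two-state rule described above, assuming the simplified convention that a pre-synaptic spike shortly before the post-synaptic spike sets the device to the LRS (potentiation) and one shortly after sets it to the HRS (depression); the window width and conductance values are illustrative.

        G_LRS, G_HRS = 1e-4, 1e-6   # the two synaptic conductance states (siemens)
        WINDOW = 10e-3              # STDP timing window (seconds)

        def stdp_update(g, t_pre, t_post):
            """Binary STDP: potentiate to LRS when the pre-synaptic spike
            precedes the post-synaptic spike within the window, depress to
            HRS when it follows within the window, else leave g unchanged."""
            dt = t_post - t_pre
            if 0 < dt <= WINDOW:
                return G_LRS        # causal pairing -> potentiation
            if -WINDOW <= dt < 0:
                return G_HRS        # anti-causal pairing -> depression
            return g

        g = G_HRS
        g = stdp_update(g, t_pre=0.000, t_post=0.004)  # pre before post
        print(g == G_LRS)                              # True: synapse potentiated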